Journal of Medical Internet Research
JMIR Publications Inc.
Preprints posted in the last 7 days, ranked by how well they match Journal of Medical Internet Research's content profile, based on 85 papers previously published here. The average preprint has a 0.20% match score for this journal, so anything above that is already an above-average fit.
Tian, J.; Kurkova, V.; Wu, Y.; Adu, M.; Hayward, J.; Greenshaw, A. J.; Cao, B.
Patient-generated streaming data from wearable and digital technologies are increasingly promoted as a means of supporting mental health monitoring and clinical decision-making. While patient acceptance of these technologies has been reported, clinician perspectives remain underexplored despite their central role in determining whether streaming data are meaningfully integrated into routine care. This study explored clinicians' experiences, as well as perceived facilitators and barriers, related to integrating patient-generated streaming data into routine mental health practice. A qualitative, exploratory interview study was conducted to examine clinicians' experiences and perspectives on integrating patient-generated streaming data into mental health care. Semi-structured interviews were conducted with 33 clinicians, including family physicians (n=11), psychiatrists (n=12), and psychologists (n=10). Data were analyzed using reflexive thematic analysis guided by Braun and Clarke's six-step approach. Six themes were identified. Clinicians described variable use of digital and streaming technologies, ranging from routine engagement to deliberate non-use. Streaming data were viewed as clinically valuable when they provided longitudinal and objective insights, identified physiological and behavioural pattern changes, and supported patient engagement. However, clinicians emphasized that clinical usefulness was contingent on interpretability, contextual information, and relevance to decision-making. Major barriers included poor integration with electronic medical records, time constraints, data volume, limited organizational support, and uncertainty regarding data reliability and validity. Clinicians also expressed persistent concerns about privacy, governance, and regulatory oversight, highlighting the need for clear safeguards and accountability structures. Clinicians view patient-generated streaming data as a promising adjunct to mental health care, particularly for capturing longitudinal change between visits. However, meaningful clinical integration remains constrained by usability, workflow, organizational, and regulatory challenges, as well as limited confidence in data interpretation. Addressing these barriers through improved system integration, interpretive support, validation, and governance will be essential for translating the potential of streaming data into routine clinical practice.
Kwon, C.-Y.; Lee, B.; Kim, M.; Mun, J.-h.; Seo, M.-G.; Yoon, D.
Background: Hwa-byung (HB) is a Korean culture-bound syndrome characterised by prolonged suppression of anger and somatic complaints. No evidence-based digital therapeutic (DTx) has been developed for HB. We evaluated the feasibility, user experience (UX), and preliminary clinical effect of an acceptance and commitment therapy (ACT)-based DTx application, Hwa-free, for HB. Methods: Adults aged 19-80 years diagnosed with HB were enrolled in a four-week app-based intervention with assessments at baseline (Week 0), Week 2, Week 4, and Week 8 follow-up. The primary outcome was UX assessed via a 22-item survey at Week 4. Secondary outcomes included HB-related symptom and personality scales, depression, anxiety, anger expression, psychological flexibility, health-related quality of life, and heart rate variability. Results: Of 45 screened, 30 were enrolled and 28 constituted the modified intention-to-treat population. Mean app use was 19.9 ± 7.9 days (71.2% adherence over 28 days). Adverse events were infrequent and unrelated to the intervention. Positive response rates exceeded 80% for video content (items 2-4: 82.8-89.7%), HB self-assessment (86.2%), meditation therapy (86.2%), and in-app guidance (85.7%). Pre-post improvements from baseline to Week 4 were observed in 11 of 18 clinical scales, including the HB Symptom Scale (Δ = -9.8, Cohen's d = -0.92), Beck Depression Inventory-II (Δ = -13.3, d = -1.11), and state anger (Δ = -7.8, d = -0.96). The HB screening-positive rate declined from 100% at baseline to 55.6% at Week 8. Conclusions: Hwa-free demonstrated adequate feasibility, acceptable UX, and preliminary evidence of clinically meaningful improvement in HB-related symptoms. A future randomised controlled trial is warranted. Trial registration: CRIS, KCT0011105.
Glick, C. C.; Pirzada, S. T.; Quah, S. K.; Feldman, S.; Enabulele, I.; Madsen, S.; Billimoria, N.; Feldman, S.; Bhatia, R.; Spiegel, D.; Saggar, M.
Background: Scalable, low-burden behavioral interventions are needed to address rising subclinical mental health symptoms. However, few randomized controlled trials have evaluated ultra-brief, remotely delivered meditation using multimodal outcome assessment under real-world conditions. Methods: We conducted a fully remote randomized controlled trial (ClinicalTrials.gov: NCT06014281) evaluating a focused-attention meditation intervention delivered via brief instructor training and independent daily practice. A total of 299 meditation-naive adults were randomized to immediate intervention or waitlist control in a delayed-intervention design. Participants practiced ≥10 minutes daily for 8 weeks within a 16-week study. Outcomes included validated self-report measures, web-based cognitive tasks, and wearable-derived physiological metrics. Results: Across randomized and within-participant replication phases, the intervention was associated with significant reductions in anxiety and mind wandering, with effects remaining stable during 8-week follow-up. Improvements were greatest among participants with higher baseline symptom burden. Sleep disturbance improved selectively among individuals with poorer baseline sleep. Secondary outcomes, including rumination, perceived stress, social connectedness, and quality of life, also improved. Cognitive performance showed modest improvements primarily among lower-performing participants. Resting heart rate exhibited nominal reductions. Conclusions: An ultra-brief, fully remote meditation intervention requiring 10 minutes per day was associated with sustained improvements in psychological functioning and smaller, baseline-dependent effects on cognition in a non-clinical population. These findings support digital delivery of low-dose meditation as a scalable preventive mental health strategy.
Hassell, N.; Marcenac, P.; Bationo, C. S.; Hirve, S.; Tempia, S.; Rolfes, M. A.; Duca, L. M.; Hammond, A.; Wijesinghe, P. R.; Heraud, J.-M.; Pereyaslov, D.; Zhang, W.; Kondor, R. J.; Azziz-Baumgartner, E.
Introduction: Modeling when influenza epidemics typically occur can help countries optimize surveillance, time clinical and public health interventions, and reduce the burden of influenza. Methods: We used influenza virus detections reported during 2011-2024 by 180 countries to the Global Influenza Surveillance and Response System, excluding COVID-19 pandemic-impacted years (2020-2023). We analyzed data by calendar year (week 1-52) or shifted year (week 30-29) time windows, based on when most influenza detections occurred in each country. For countries with sufficient data, we computed generalized additive models (GAMs) of each country's weekly influenza-positive tests to smooth and impute time series distributions. From these GAMs, we calculated each country's normalized weekly influenza burden. Country-specific normalized time series were grouped using hierarchical k-means clustering to minimize the Euclidean distance between time series within clusters. We calculated cluster-specific GAMs to estimate average seasonal timing. Countries without sufficient data were assigned to a cluster based on population-weighted latitudinal distance to a cluster's mean latitude. Results: We identified five clusters, or epidemic zones, from 111 countries with sufficient data. The influenza burden in epidemic zones A and B was consistent with a northern hemisphere pattern, with most influenza detections occurring during October-April (A) and September-March (B), while epidemic zones D and E were characterized by southern hemisphere-like seasonal timing, with most influenza burden occurring during May-November. Epidemic zone C had most influenza burden occurring during September-March; most countries assigned to this cluster were in the tropics. Conclusion: Epidemic zones may serve as a useful tool to strengthen and optimize influenza surveillance for global health decision-making (e.g., during vaccine strain composition discussions) and to guide country preparedness efforts for seasonal influenza epidemics, including the timing of enhanced surveillance, as well as the procurement and delivery of vaccines and antivirals.
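As a rough illustration of the modelling steps described above, the sketch below fits a per-country Poisson GAM to weekly influenza-positive counts, normalises the fitted 52-week curve, and clusters countries' curves. It is an assumption-laden sketch, not the authors' code: it substitutes plain k-means for the hierarchical k-means variant, uses the pygam library, and invents the input format.

```python
# Minimal sketch (assumed inputs: 1-D numpy arrays of week numbers and
# weekly influenza-positive counts per country).
import numpy as np
from pygam import PoissonGAM, s          # pip install pygam (illustrative choice)
from sklearn.cluster import KMeans

def normalised_seasonal_curve(weeks, counts):
    """Fit counts ~ s(week) and return a 52-week curve that sums to 1."""
    gam = PoissonGAM(s(0, n_splines=20)).fit(weeks.reshape(-1, 1), counts)
    grid = np.arange(1, 53).reshape(-1, 1)
    fitted = gam.predict(grid)
    return fitted / fitted.sum()

def cluster_epidemic_zones(curves_by_country, k=5, seed=0):
    """curves_by_country: dict of country -> 52-length normalised curve."""
    names = list(curves_by_country)
    X = np.vstack([curves_by_country[c] for c in names])
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    return dict(zip(names, labels))
```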
Blankson, P.-K.; Hussien, S.; Idris, F.; Trevillion, G.; Aslam, A.; Afani, A.; Dunlap, P.; Chepkorir, J.; Melgarejo, P.; Idris, M.
Background: Recruitment remains a major barrier to timely clinical trial completion. Trialshub is an LLM-powered, chat-based platform intended to help users identify relevant trials and connect with coordinators to streamline recruitment workflows. Objective: To evaluate the perceived usability and operational value of Trialshub, and identify implementation considerations for real-world deployment. Methods: A usability test of the Trialshub application was conducted at Morehouse School of Medicine. Purposively selected participants included clinical research coordinators and individuals with and without clinical trial search experience. Participants completed a pre-test survey assessing demographics, digital health information behaviors, and familiarity with AI tools, followed by a moderated usability session using a Trialshub prototype. Users completed scenario-based tasks (locating a breast cancer trial, reviewing results, and initiating coordinator contact) using a think-aloud protocol. Task ratings, screen recordings, and transcribed feedback were analyzed descriptively and thematically. Results: Participants reported high comfort with using digital tools and moderate-to-high familiarity with AI. Trialshub's chat-first design, guided prompts, and checklist-style eligibility display were perceived as intuitive and reduced cognitive load. Fast access to trials and the coordinator-contact workflow were viewed positively. Key usability issues included uncertainty at step transitions, insufficient cues for selecting results and next actions, and inconsistent system reliability (loading delays, errors, and broken trial detail pages). Participants also noted redundant questioning due to limited conversational memory, requested improved filtering/sorting, and asked for clearer calls-to-action. All participants indicated that Trialshub has strong potential to meaningfully improve clinical trial processes. Conclusions: Trialshub shows promise for improving trial discovery and recruitment workflows, with identified design implications for real-world deployment.
Souza, F. L.; Cabral Souza, N.; Mendes, J. A. d. A.
Introduction: Family Constellation Therapy (FCT) has been widely disseminated in clinical, public health, and judicial settings despite persistent concerns regarding its theoretical basis, safety, and the limited availability of rigorous randomised evidence supporting its clinical use. Objective: The aim of this systematic review is to assess the effects of FCT across all clinical conditions, explicitly considering both benefits and harms, and to summarise the characteristics of studies and intervention settings used in randomised controlled trials of FCT. Methods: Following a prospectively registered protocol (CRD420251136190), we conducted a systematic search of seven databases (PubMed, EMBASE, APA PsycInfo, CENTRAL, BVS, Web of Science, and CINAHL) and grey literature (ICTRP and the ProQuest database) without language or date restrictions to identify published and unpublished randomised controlled trials of FCT. Study selection, data extraction, risk of bias (RoB 2), and certainty of evidence (GRADE) assessments were performed in duplicate. Statistical analyses followed a prospectively registered analysis plan with prespecified criteria for data pooling and for handling analytical limitations. Results: No reliable evidence was found to support the use of FCT for any condition across both clinical and non-clinical samples. All included trials were judged to be at high risk of bias and all comparisons were rated as very low-certainty evidence. Concerns regarding potential adverse effects were identified, and the available data were insufficient to establish the effectiveness of the intervention, precluding any clinical recommendation. Conclusion: Clinicians, policymakers, and consumers should reconsider adopting FCT until reliable evidence becomes available.
Martin, C. M.; Henderson, I.; Campbell, D.; Stockman, K.
Background: The instability-plasticity framework proposes that multimorbidity trajectories periodically enter instability phases that are vulnerable to escalation but also potentially modifiable through relational intervention. Whether such phases commonly resolve without acute care, or predominantly progress to hospitalisation, has not been quantified at scale. Objective: To quantify instability window outcomes across a longitudinal monitoring cohort; to test whether the characteristics distinguishing admitted from resolved windows reflect within-patient trajectory dynamics or between-patient severity; and to characterise which patient-reported and operator-rated signals reliably precede admission, using both a curated pilot sub-cohort and the full monitoring cohort with an explicit cross-cohort comparison. Methods: Two complementary analyses were conducted on data from the MonashWatch Patient Journey Record (PaJR) relational telehealth system. Instability windows were identified algorithmically (>=2 consecutive calls with Total_Alerts >=3) across the full longitudinal dataset (16,383 calls, 244 patients, 2.5 years) and classified by linkage to ED and hospital admission data. Window characteristics were compared at window, patient, and paired within-patient levels. Pre-admission signal cascades were analysed in two configurations: a curated pilot sub-cohort (64 patients, 280 calls, +/-10-day window, 103 admissions, December 2016-September 2017) and the full monitoring cohort (175 patients, 1,180 pre-admission calls, +/-14-day window, December 2016-July 2019). A three-way cross-cohort comparison decomposed differences between the two configurations into pipeline and population effects. Results: In total, 621 instability windows were identified across 157 patients (64% of the monitored cohort). Of these, 67.3% resolved without hospital admission or ED attendance, a rate stable across alert thresholds 1-5. In paired within-patient analysis (n = 70), duration in days (p = 0.002) and multi-domain breadth (p < 0.001) distinguished admitted from resolved windows; alert intensity did not. In the pilot sub-cohort, patient-reported illness prognosis (Q21) was the dominant pre-admission signal (GEE beta = +0.058, AUC = 0.647, p-BH = 0.018). This finding did not replicate in the full cohort: Q21 was non-significant (GEE beta = -0.008, p = 0.154, AUC = 0.507). Cross-cohort analysis identified selective curation of the pilot sub-cohort as the primary explanation. In the full cohort, six signals escalated significantly before admission after Benjamini-Hochberg correction: total alerts, health impairment (Q26), red alerts, self-rated health (Q3), patient concerns (Q1), and operator concern (Q34). Health impairment achieved the highest individual AUC (0.605) and showed the longest pre-admission lead. No individual signal exceeded an AUC of 0.61. Conclusions: Two-thirds of instability phases resolve without hospitalisation, providing direct empirical support for trajectory plasticity as a clinically frequent phenomenon. Within the same patient, persistence, both in duration and in the consistency of high-severity multi-domain flagging across calls, distinguishes trajectories that tip into admission from those that resolve. The Q21 signal reversal between cohorts illustrates how selective curation can produce compelling but non-replicable findings in monitoring research. In the full population, objective alert signals and operator judgement, rather than patient illness prognosis, carry the pre-admission signal.
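The window-detection rule stated in the Methods (at least two consecutive calls with Total_Alerts >= 3) can be sketched as below. This is a minimal illustration assuming a pandas DataFrame of calls; the column names are placeholders, not the PaJR schema.

```python
# Sketch: identify instability windows as runs of consecutive high-alert calls.
import pandas as pd

def instability_windows(calls: pd.DataFrame, alert_threshold: int = 3,
                        min_consecutive: int = 2) -> pd.DataFrame:
    """Return one row per instability window with start/end dates and peak alerts.
    Assumed columns: 'patient_id', 'call_date', 'total_alerts'."""
    calls = calls.sort_values(['patient_id', 'call_date']).copy()
    above = calls['total_alerts'] >= alert_threshold
    # a new run starts whenever the above-threshold flag flips or the patient changes
    run_id = ((above != above.shift()) |
              (calls['patient_id'] != calls['patient_id'].shift())).cumsum()
    runs = calls[above].groupby(run_id[above]).agg(
        patient_id=('patient_id', 'first'),
        start=('call_date', 'min'),
        end=('call_date', 'max'),
        n_calls=('total_alerts', 'size'),
        peak_alerts=('total_alerts', 'max'),
    )
    return runs[runs['n_calls'] >= min_consecutive].reset_index(drop=True)
```

Detected windows would then be classified as admitted or resolved by linking their date ranges to ED and hospital admission records, as described above.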
Matthewman, J.; Denaxas, S.; Langan, S.; Painter, J. L.; Bate, A.
Objectives: Large language models (LLMs) have shown promise in creating clinical codelists for research purposes, a time-consuming task requiring expert domain knowledge. Here, we evaluate the performance and assess failure modes of a retrieval augmented generation (RAG) approach to creating clinical codelists for the large and complex medical terminology used by the Clinical Practice Research Datalink (CPRD). Materials & Methods: We set up a RAG system using a database of word embeddings of the medical terminology that we created using a general-purpose word embedding model (gemini-embedding). We developed 7 reference codelists presenting different challenges and tagged required and optional codes. We ran 168 evaluations (7 codelists, 2 different database subsets, 4 models, 3 epochs each). Scoring was based on the omission of required codes and the inclusion of irrelevant codes. We used model-grading (i.e., grading by another LLM with the reference codelists provided as context) to evaluate the output codelists (a score of 0% being all incorrect and 100% being all correct). Results: Accuracy varied across models and codelists, with Gemini 3 Pro (score 43%) generally performing better than Claude Sonnet 4.6 (36%) and Gemini 3 Flash, and OpenAI GPT 5.2 performing worst (14%). Models performed better with shorter target codelists (e.g., Eosinophilic esophagitis with four codes, and Hidradenitis suppurativa with 14 codes). In contrast, all models consistently failed to produce a complete Wrist fracture codelist (with 214 required codes). We further present evaluation summaries and failure-mode evaluations produced by parsing LLM chat logs. Discussion: Besides demonstrating that a single-shot RAG approach is currently not suitable for codelist generation, we demonstrate failure modes including hallucinations, retrieval failures, and generation failures where retrieved codes are not used. Conclusions: Our findings suggest that while RAG systems using current frontier LLMs may create correct clinical codelists in some cases, they still struggle with large and complex terminologies and with codelists containing a large number of codes. The failure modes we highlight can inform the creation of future workflows designed to avoid these failures.
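The retrieval step of such a RAG setup can be illustrated with a cosine-similarity search over embedded terminology descriptions. This is a generic sketch, not the authors' pipeline: `embed` stands in for whichever text-embedding model is used, and the prompt wording in the trailing comment is hypothetical.

```python
# Sketch: rank medical terms by similarity to a phenotype query, then hand
# the top-k candidate codes to an LLM prompt for codelist generation.
import numpy as np

def top_k_codes(query, term_texts, term_codes, embed, k=50):
    """embed: callable mapping list[str] -> np.ndarray of shape (n, d)."""
    q = embed([query])[0]
    T = embed(term_texts)
    sims = T @ q / (np.linalg.norm(T, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)[:k]
    return [(term_codes[i], term_texts[i], float(sims[i])) for i in order]

# The retrieved (code, description) pairs would then be inserted into a
# generation prompt (e.g. "From the candidate codes below, build a codelist
# for wrist fracture...") and the LLM's selections scored against the
# reference codelist for omitted required codes and irrelevant inclusions.
```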
Liu, Y.; Chen, Z.; Suman, P.; Cho, H.; Prosperi, M.; Wu, Y.
This study developed a large language model (LLM)-based solution to identify people at risk of HIV using electronic health records. We transformed structured EHR data, including demographics, diagnoses, and medications, into narrative descriptions ordered by visit date and applied GatorTron, a widely used clinical LLM trained on 82 billion words of de-identified clinical text. We compared GatorTron with traditional machine learning models, including LASSO and XGBoost. We identified a cohort of 54,265 individuals, of whom only 3,342 (6%) had new HIV diagnoses. Our LLM solution, based on GatorTron, achieved excellent performance, reaching an F1 score of 53.5% and an AUC of 0.88, comparable to traditional machine learning approaches. Subgroup analysis showed that, across age, sex, and race/ethnicity groups, both the LLM and traditional models achieved AUCs above 0.82. Interpretability analyses showed broadly consistent patterns across the LLM and traditional machine learning models.
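A sketch of the two ingredients described above, turning structured visit records into a narrative string and loading a transformer classification head, is shown below. The record schema and the checkpoint name are assumptions (a publicly released GatorTron checkpoint on Hugging Face); the authors' actual preprocessing, labels, and fine-tuning setup are not reproduced here.

```python
# Illustrative sketch: structured EHR -> narrative text -> transformer classifier.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def visits_to_narrative(patient):
    """patient: dict with 'demographics' (str) and 'visits', a list of dicts
    with 'date', 'diagnoses', 'medications' (illustrative schema)."""
    parts = [patient['demographics']]
    for v in sorted(patient['visits'], key=lambda v: v['date']):
        parts.append(
            f"On {v['date']}, diagnoses: {', '.join(v['diagnoses']) or 'none'}; "
            f"medications: {', '.join(v['medications']) or 'none'}."
        )
    return ' '.join(parts)

checkpoint = "UFNLP/gatortron-base"   # assumed public checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
# Narratives would then be tokenised (truncated to the model's max length)
# and the model fine-tuned to predict incident HIV diagnosis.
```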
Sreekanth, J.; Salgado-Baez, E.; Edel, A.; Gruenewald, E.; Piper, S. K.; Spies, C.; Balzer, F.; Boie, S. D.
Routine ICU data offer valuable insights into daily physiological rhythms. While traditional methods assume these cycles maintain fixed periods and amplitudes, their inherent variability requires dynamic estimation of instantaneous trends. The wavelet transform effectively resolves circadian oscillations, especially for frequently measured vital parameters. We present novel extensions to the Continuous Wavelet Transform (CWT) power spectral analysis to better detect and segment subtle temporal patterns. Using this approach, we uncover hidden circadian patterns in cardiovascular vitals such as heart rate (HR) and mean blood pressure (MBP) measured over five days in a retrospective cohort of 855 ICU patients. By quantifying non-stationary rhythms, we identified diurnal and semi-diurnal oscillations varying in period and power according to delirium and deep sedation. Notably, HR exhibits a clear diurnal and semi-diurnal rhythm when delirium is absent. Overall, our framework supports the CWT as a powerful tool for analyzing complex physiological signals, particularly vital signs. Crucially, our findings suggest that cardiovascular rhythm disruption can be associated with ICU-related delirium and deep sedation.
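To make the CWT step concrete, the following is a minimal sketch (not the authors' extensions) that estimates diurnal and semi-diurnal wavelet power from an hourly heart-rate series using PyWavelets; the averaging bands and Morlet wavelet are illustrative choices.

```python
# Sketch: CWT power around the 24 h (diurnal) and 12 h (semi-diurnal) periods.
import numpy as np
import pywt

def circadian_power(signal_hourly: np.ndarray, dt_hours: float = 1.0):
    periods = np.arange(6, 37, 0.5)                  # candidate periods in hours
    fc = pywt.central_frequency('morl')
    scales = fc * periods / dt_hours                 # convert periods to scales
    coefs, _ = pywt.cwt(signal_hourly, scales, 'morl', sampling_period=dt_hours)
    power = np.abs(coefs) ** 2                       # shape: (n_scales, n_times)

    def band_power(lo, hi):
        return power[(periods >= lo) & (periods <= hi)].mean()

    return {'diurnal_24h': band_power(20, 28), 'semidiurnal_12h': band_power(10, 14)}
```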
Mahmud, S.; Akter, M. S.; Ahamed, B.; Rahman, A. E.; El Arifeen, S.; Hossain, A. T.
Background Depressive symptoms among reproductive-aged women represent a major public health concern in low- and middle-income countries, yet systematic screening remains limited. In most population survey datasets, the low prevalence of depression results in severe class imbalance, which challenges conventional machine learning models. Therefore, we developed and evaluated a bagging-based ensemble machine learning framework to predict depressive symptoms among reproductive-aged women using the highly imbalanced Bangladesh Demographic and Health Survey (BDHS) 2022 data. Methods The sample comprised women aged 15-49 years drawn from the BDHS 2022 data. Depressive symptoms were defined using the Patient Health Questionnaire (PHQ-9 ≥10). Candidate predictors were drawn from sociodemographic, reproductive, nutritional, psychosocial, healthcare access, and environmental domains. Feature selection was performed using Elastic Net (EN), Random Forest (RF), and XGBoost. Five classifiers (EN, RF, Support Vector Machine (SVM), K-nearest neighbors (KNN), and Gradient Boosting Machine (GBM)) were trained using both oversampling-based approaches and the proposed ensemble framework. Model performance was evaluated on an independent test set using accuracy, sensitivity, specificity, F1-score, and the normalized Matthews correlation coefficient (normMCC). Results Approximately 4.8% of women were identified with depressive symptoms. The proposed bagging ensemble framework consistently achieved more balanced predictive performance than oversampling-based models. Average normMCC improved from 0.540 (oversampling) to 0.557 (ensemble). RF and GBM ensembles demonstrated notable improvements in identifying depressive cases, while the EN ensemble achieved the highest overall performance and sensitivity. Threshold optimization yielded stable normMCC across models, indicating robust trade-offs between sensitivity and specificity. Conclusions Bagging-based ensemble learning provides a more robust and balanced approach than synthetic oversampling for predicting depressive symptoms in highly imbalanced population survey data. This approach has important implications for improving early identification and population-level mental health surveillance in resource-constrained settings.
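One common way to implement balanced bagging for a rare outcome, together with the normalised MCC (normMCC = (MCC + 1) / 2), is sketched below. The base learner and the balanced-bootstrap sampling scheme are illustrative choices rather than the paper's exact configuration.

```python
# Sketch: balanced bagging for class imbalance plus normalised MCC evaluation.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

def fit_balanced_bagging(X, y, base=None, n_estimators=25, seed=0):
    """Train each ensemble member on all minority cases plus an equal-sized
    random sample of majority cases (X, y: numpy arrays, y in {0, 1})."""
    rng = np.random.default_rng(seed)
    base = base or LogisticRegression(max_iter=1000)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    members = []
    for _ in range(n_estimators):
        idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
        members.append(clone(base).fit(X[idx], y[idx]))
    return members

def predict_vote(members, X):
    votes = np.mean([m.predict(X) for m in members], axis=0)
    return (votes >= 0.5).astype(int)

def norm_mcc(y_true, y_pred):
    return (matthews_corrcoef(y_true, y_pred) + 1) / 2
```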
Lafouti, M.; Feldman, L. S.; Hooshiar, A.
Background: Manual video-based evaluation of surgical skills can be time-consuming and delays trainee feedback. Artificial intelligence (AI) offers opportunities to automate aspects of assessment while maintaining clinician oversight. We developed an interpretable spatiotemporal model that classifies surgical expertise directly from endoscopic video in standardized training tasks and generates saliency-based "highlights reels" showing the most influential frames. Methods: An RGB pipeline combining InceptionV3 for spatial feature extraction and a gated recurrent unit (GRU) for temporal modeling was trained on the JIGSAWS dataset. The model outputs novice, intermediate, or expert labels. A rolling-window, low-latency evaluation at 30 fps with a stride of 10 frames was used. A motion-augmented variant fused RGB with optical-flow features. Spatial and temporal saliency maps highlighted key decision-making regions. Results: The RGB model achieved 95% accuracy (F1: 92% expert, 86% intermediate, 99% novice). Performance was strongest for novice and expert trials, while intermediate trials showed the lowest recall, consistent with greater ambiguity around the intermediate skill level. Saliency maps consistently emphasized tool-tissue interactions and peaked during technically demanding phases. The optical-flow variant underperformed (approximately 38% accuracy), which may reflect sensitivity to global camera motion and other non-informative motion patterns. Conclusions: This interpretable AI pipeline accurately classifies surgical skill while producing intuitive visual highlights. Future work will refine highlight thresholds and validate the approach on laparoscopic inguinal hernia repair for real-world deployment.
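An architecture-level sketch of the InceptionV3 + GRU pipeline described above, written in Keras, is shown below. The sequence length, frozen backbone, and hyper-parameters are placeholders assumed for illustration, not the paper's settings.

```python
# Sketch: frame-level InceptionV3 features, a GRU over the frame sequence,
# and a 3-class softmax (novice / intermediate / expert).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_skill_classifier(seq_len=30, img_size=299, n_classes=3):
    cnn = InceptionV3(include_top=False, weights='imagenet', pooling='avg',
                      input_shape=(img_size, img_size, 3))
    cnn.trainable = False                                 # frozen spatial backbone
    frames = layers.Input((seq_len, img_size, img_size, 3))
    feats = layers.TimeDistributed(cnn)(frames)           # (batch, seq, 2048)
    x = layers.GRU(128)(feats)                            # temporal model
    out = layers.Dense(n_classes, activation='softmax')(x)
    return Model(frames, out)

model = build_skill_classifier()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

In a rolling-window deployment, overlapping clips (e.g. 30 frames with a stride of 10) would be passed through this model and the per-clip predictions aggregated; saliency maps over the spatial and temporal dimensions would then identify the most influential frames.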
Phillips, R.; Wood, F.; Torrens-Burton, A.; Glennan, C.; Sellars, P.; Lowe, S.; Caffoor, A.; Hallingberg, B.; Gillespie, D.; Shepherd, V.; Poortinga, W.; Wahl-Jorgensen, K.; Williams, D.
Objectives Concerns about COVID-19 were a key driver of infection-prevention behaviour during the pandemic. The aim of this study was to gain an in-depth longitudinal understanding of the type and frequency of concerns experienced throughout the first two years of the COVID-19 pandemic. Design Content analysis of qualitative descriptions provided in a prospective longitudinal online survey as part of the COVID-19 UK Public Experiences (COPE) Study. Method At baseline (March/April 2020), when the UK entered its first national lockdown, 11,113 adults completed the COPE survey. Follow-up surveys were conducted at 3, 12, 18 and 24 months. Participants were recruited via the HealthWise Wales research registry and social media. Baseline surveys collected demographic and health data, and all waves included an open-ended question about COVID-19 concerns. Content analysis was used to identify the type and frequency of concerns at each time point. Results A total of 41,564 open-text responses were coded into six categories: personal harm (n=16,353), harm to others (n=11,464), social/economic impact (n=6,433), preventing transmission (n=4,843), government/media (n=1,048), and general concerns (n=1,423). The proportion of respondents reporting any concern declined from 75.3% at baseline to 65.8% at 24 months. Over time, concerns about personal harm increased (baseline 41.8% vs. 24-months 52.7%) whereas concerns about harm to others decreased (baseline 48.5% vs. 24-months 28.6%). Concerns about harm were also expressed in relation to clinical vulnerability, lack of trust in government/media, and perceived lack of adherence by others. These were balanced against concerns about wider social and economic impacts of restrictions. Conclusions Public concerns about COVID-19 evolved substantially over the first two years of the pandemic, reflecting changing perceptions of risk and responsibility. Monitoring concerns longitudinally is vital to help guide effective communication and behavioural interventions during future pandemics.
Yamga, E.; Goudrar, R.; Despres, P.
Introduction Secondary use of electronic health records (EHRs) often requires transforming raw clinical information into research-grade data. A central step in this process is EHR phenotyping - the identification of patient cohorts defined by specific medical conditions. Although numerous approaches exist, from ICD-based heuristics to supervised learning and large language models (LLMs), the field lacks standardized benchmark datasets, limiting reproducibility and hindering fair comparison across methods. Methods We developed the MIMIC-IV Phenotype Atlas (MIPA) dataset, an adaptation of MIMIC-IV that provides expert-annotated discharge summaries across 16 phenotypes of varying prevalence and complexity. Two independent clinicians reviewed and labeled the discharge summaries, resolving disagreements by consensus. In parallel, we implemented a processing pipeline that extracts multimodal EHR features and generates training, validation, and testing datasets for supervised phenotyping. To illustrate MIPA's utility, we benchmarked four phenotyping methods on the task: ICD-based classifiers, keyword-driven Term Frequency-Inverse Document Frequency (TF-IDF) classifiers, supervised machine learning (ML) models, and LLMs. Results The final MIPA corpus consists of 1,388 expert-annotated discharge summaries. Annotation reliability was high (mean document-level kappa = 0.805, mean label-level kappa = 0.771), with 91% of disagreements resolved through consensus review. MIPA provides high-quality phenotype labels paired with structured EHR features and predefined train/validation/test splits for each phenotype. In the benchmarking case study, LLMs achieved the highest F1 scores in 13 of 16 phenotypes, particularly for conditions requiring contextual interpretation of clinical narrative, while supervised ML offered moderate improvements over rule-based baselines. Conclusion MIPA is the first publicly available benchmark dataset dedicated to EHR phenotyping, combining expert-curated annotations, broad phenotype coverage, and a reproducible processing pipeline. By enabling standardized comparison across ICD-based heuristics, ML models, and LLMs, MIPA provides a durable reference resource to advance methodological development in automated phenotyping.
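For illustration, the TF-IDF baseline class of methods benchmarked here can be assembled in a few lines of scikit-learn; the vectoriser and classifier settings below are assumptions, not the benchmark's configuration.

```python
# Sketch: a keyword/TF-IDF phenotype classifier over discharge summaries.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def tfidf_phenotype_classifier():
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_features=50_000),
        LogisticRegression(max_iter=1000, class_weight='balanced'),
    )

# Usage on one phenotype's predefined split (illustrative variable names):
# clf = tfidf_phenotype_classifier()
# clf.fit(train_notes, train_labels)
# print(f1_score(test_labels, clf.predict(test_notes)))
```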
Pozo, M.; Pape, A.; Locke, B.; Pettine, W. W.
Timely identification of intensive care unit (ICU) patients likely to exit the unit can support anticipatory workflows such as chart review, eligibility screening, and patient outreach prior to transfer. Most ICU discharge prediction studies report discrimination and calibration, but these metrics do not quantify the decision consequences of acting on predictions. Using adult ICU admissions from MIMIC-IV, we represented each ICU stay as a sequence of daily clinical summaries and trained logistic regression, random forest, and XGBoost models to predict next-day ICU transfer. Models achieved ROC AUC of 0.80-0.84 with differing calibration. We evaluated decision utility using decision curve analysis (DCA), where positive predictions trigger proactive review. Across thresholds, model-guided strategies outperformed review-all, review-none, and a simple clinical rule. To translate net benefit into implementable operations, we modeled a clinical trial recruitment workflow with an 8-hour daily time constraint, incorporating chart review and consent effort. At a feasible operating threshold (0.23), the model flagged ~23 charts/day and yielded ~1.23 enrollments/day under conservative eligibility and consent assumptions. These results demonstrate that DCA provides a transparent framework for determining when ICU transfer predictions are worth using and how thresholds should be selected to align with real-world workflow constraints. Data and Code Availability: This research has been conducted using data from MIMIC-IV. Researchers can request access via PhysioNet. Implementation code is available upon request.
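Decision curve analysis rests on the standard net-benefit formula, NB(t) = TP/n - FP/n * t/(1 - t), compared against "review all" and "review none" strategies. A minimal sketch follows; the thresholds and variable names are illustrative, not the study's code.

```python
# Sketch: net benefit for a model-guided review strategy vs. treat-all.
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    pred = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_review_all(y_true, threshold):
    prevalence = np.mean(y_true)
    return prevalence - (1 - prevalence) * threshold / (1 - threshold)

# thresholds = np.linspace(0.05, 0.5, 46)
# model_nb = [net_benefit(y_test, p_hat, t) for t in thresholds]
# all_nb   = [net_benefit_review_all(y_test, t) for t in thresholds]
# "Review none" has net benefit 0 at every threshold.
```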
Basharat, A.; Hamza, O.; Rana, P.; Odonkor, C. A.; Chow, R.
Introduction Large language models are increasingly being used in healthcare. In interventional pain medicine, clinical reasoning is essential for procedural planning. Prior studies show that simplified prompts reduce clinical detail in AI-generated responses. It remains unclear whether this reflects knowledge loss or simply prompt-driven suppression of information. Methods We performed a controlled comparative study using 15 standardized low back pain questions representing common interventional pain questions. Each question was submitted to ChatGPT under three conditions: a professional-level prompt (DP), a fourth-grade reading-level prompt (D4), and clinician-directed rewriting of the D4 response to a medical level (U4→MD). No follow-up prompting was allowed. Three physicians independently rated responses for accuracy using a 0-2 ordinal scale. Clinical completeness was determined by consensus. Word count and Flesch-Kincaid Grade Level (FKGL) were also measured. Paired t-tests compared conditions. Results Accuracy was highest with professional prompting (1.76). Accuracy declined with the fourth-grade prompt (1.33; p = 0.00086). When simplified responses were rewritten for clinicians, accuracy returned to baseline (1.76; p ≈ 1.00 vs DP). Clinical completeness followed the same pattern: DP 80.0%, D4 6.7%, U4→MD 73.3%. Fourth-grade responses were shorter and less complex. Upscaled responses were more complex and similar in length to professional responses. Inter-rater reliability was low (Fleiss κ = 0.17), but trends were consistent across conditions. Conclusions Reduced clinical detail under simplified prompts appears to reflect constrained output rather than loss of knowledge. Clinician-directed reframing restores omitted content. LLM performance in interventional pain depends strongly on prompt design and intended audience.
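The two quantitative measures mentioned, paired t-tests across conditions and the Flesch-Kincaid Grade Level, follow standard definitions. A rough sketch is below; the syllable counter is a crude heuristic assumed for illustration, not the study's readability tooling.

```python
# Sketch: paired comparison of per-question accuracy scores and FKGL.
import re
from scipy import stats

def paired_ttest(scores_a, scores_b):
    """scores_a, scores_b: per-question mean ratings under two prompt conditions."""
    return stats.ttest_rel(scores_a, scores_b)   # returns (t statistic, p value)

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r'[aeiouyAEIOUY]+', w))) for w in words)
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59
```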
Van, T. A.
Background: Type 2 diabetes mellitus (T2DM) is a leading global public health challenge. Machine learning (ML) combined with Explainable AI (XAI) is increasingly applied to T2DM risk prediction, but the field lacks a quantitative overview of methodological trends and integration gaps. Methods: We present a structured synthesis and critical analysis of the XAI literature on T2DM risk prediction, combining (i) quantitative bibliometric analysis of a two-database corpus (N = 2,048 documents from Scopus and PubMed/MEDLINE, deduplicated via a transparent three-tier pipeline) and (ii) an in-depth selective review of 15 highly cited papers. Reporting follows PRISMA 2020, adapted for metadata-based synthesis; analyses include keyword frequency, rule-based thematic clustering, and publication trend analysis. Results: The field grew rapidly, from 36 documents (2020) to 866 (2025). SHAP and LIME dominate XAI methods; XGBoost and Random Forest dominate ML models. Critically, KG/GNN terms appeared in only 17 documents (~0.83%) compared with 906 for XAI methods, a 53.3:1 disparity. This gap is consistent across both databases, which share 33.2% of their records, ruling out a single-database artifact. The selective review confirmed that none of the 15 highly cited papers combined all three components (ML, XAI, and KG) in T2DM risk prediction. Conclusions: The XAI for T2DM risk prediction field exhibits a clinical interpretability gap: statistical explanations are rarely linked to structured clinical pathways. We propose a three-layer conceptual framework (Predictive → Explainability → Knowledge) that integrates KG as a supplementary semantic layer, with potential applications in clinical decision support and population-level screening. The framework does not perform true causal inference but structures explanations around established pathophysiological knowledge. This study contributes a transferable methodology and a quantified research gap to guide future work integrating ML, XAI, and structured medical knowledge.
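Keyword-frequency tallies of the kind that underlie counts such as the 53.3:1 disparity can be approximated with a simple term-matching pass over titles, abstracts, and keywords. The term lists below are illustrative placeholders, not the study's search strategy.

```python
# Sketch: count how many documents mention XAI-method terms vs. KG/GNN terms.
XAI_TERMS = {'shap', 'lime', 'explainable ai', 'xai'}
KG_TERMS = {'knowledge graph', 'gnn', 'graph neural network'}

def docs_with_any_term(documents, terms):
    """documents: iterable of title+abstract+keyword strings."""
    return sum(1 for doc in documents
               if any(term in doc.lower() for term in terms))

# xai_docs = docs_with_any_term(corpus, XAI_TERMS)
# kg_docs  = docs_with_any_term(corpus, KG_TERMS)
# disparity = xai_docs / max(1, kg_docs)
```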
Hassani, A.; Pecar, K.; Soliman, M.; Bunyon, P.; Ellinger, C.; Tulysewskid, G.; Croft, J.; Carillo, C.; Wewegama, G.; du Plessis-Schneider, S.; Estevez, J. J.
Background Individuals experiencing or at risk of homelessness face substantial barriers to preventive eye care that are poorly addressed by standard service models. Interdisciplinary optometry-social work collaboration offers a rights-based approach to improving engagement and continuity of care. Methods A convergent mixed-methods study was conducted between February and August 2024 at a multidisciplinary community centre. Clients experiencing or at risk of homelessness received integrated optometry and social work assessment and were prioritised as high, medium, or low based on combined clinical and social risk. Social work follow-up was guided by the Triple Mandate and W-Questions framework. Quantitative data were summarised using mean (SD), median [IQR], or n (%). Qualitative case notes were analysed using content analysis with inductive coding and secondary review for consistency. Results A total of 165 clients had priority categories coded (high: 68; medium: 47; low: 154). Demographic data were available for 132 clients (60% male; mean age 49.5 years [SD 16]); 27% had not completed high school, 89% reported weekly income below AUD 1000, and 28% had vision impairment. Two hundred forty-five case-note entries were consolidated into 146 unique records. SMS (46%) and phone calls (38%) were the most documented contact methods, although only 21% of calls were answered; missed calls (13%) and disconnected numbers (7%) were common. Multi-modal contact was more frequently documented for higher-priority clients. Appointment assistance was the most recorded facilitator (71%), while rights-based supports, including interpreter and transport assistance, were infrequently documented (<=5%). Qualitative analysis identified unstable communication, reliance on informal supports, and service fragmentation as key influences on recall outcomes. Conclusion This study supports an interdisciplinary, rights-based optometry-social work model to address barriers to preventive eye care among people experiencing or at risk of homelessness. Embedding structured handovers and tiered recall processes within community-based services may strengthen continuity and accountability for high-priority clients. Future implementation should evaluate outcomes related to equity of reach, service integration, and sustained engagement in care.
Kim, S.; Guo, Y.; Sutari, S.; Chow, E.; Tam, S.; Perret, D.; Pandita, D.; Zheng, K.
Social determinants of health (SDoH) are important for clinical care, but it remains unclear how much AI-captured social context is preserved after clinician editing in ambient documentation workflows. We retrospectively analyzed 75,133 paired ambient AI-drafted and clinician-finalized note sections from ambulatory care at a large academic health system. Using a rule-based NLP pipeline, we extracted 21 SDoH categories and quantified retention, deletion, and addition. SDoH appeared in 25.2% of AI drafts versus 17.2% of final notes. At the mention level, AI captured 29,991 SDoH mentions, of which 45.1% were deleted and 54.9% were retained, while clinicians added 3,583 new mentions. Insurance and marital status were most often deleted, whereas substance use and physical activity were more often retained. Deletion patterns also varied by specialty, supporting the need for specialty-aware ambient AI systems.
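A minimal sketch of a rule-based SDoH extractor and the retained/deleted/added comparison between paired note versions is given below; the category lexicons are placeholders, not the study's 21-category pipeline.

```python
# Sketch: lexicon-based SDoH category extraction and draft-vs-final comparison.
SDOH_LEXICON = {
    'insurance': ['insurance', 'medicaid', 'uninsured'],
    'marital_status': ['married', 'divorced', 'widowed', 'single'],
    'substance_use': ['alcohol', 'tobacco', 'smoking', 'drug use'],
    'physical_activity': ['exercise', 'physical activity', 'sedentary'],
}

def extract_sdoh(text):
    text = text.lower()
    return {cat for cat, cues in SDOH_LEXICON.items()
            if any(cue in text for cue in cues)}

def retention_stats(ai_draft, final_note):
    ai, final = extract_sdoh(ai_draft), extract_sdoh(final_note)
    return {'retained': ai & final,   # captured by the AI draft and kept
            'deleted': ai - final,    # captured by the AI draft, removed in editing
            'added': final - ai}      # introduced by the clinician
```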
Mwaka, E. S.; Nabukenya, S.; Kasiita, V.; Bagenda, G.; Rutebemberwa, E.; Ali, J.; Gibson, D.
Background: Mobile phone-based tools are increasingly used to collect data on non-communicable disease (NCD) risk factors, particularly in low-resource settings where traditional data collection systems face operational and infrastructural constraints. This study examined stakeholder perspectives on the use of enhanced mobile phone-based capabilities to support the collection of public health surveillance data on NCD risk factors in low-resource settings. Methods: An exploratory qualitative study was conducted between November 2022 and July 2023. Twenty in-depth interviews were conducted with public health specialists, ethicists, NCD researchers, health informaticians, and policy makers in Uganda. Thematic analysis was used to interpret the results. Results: Four themes emerged from the data: benefits of using mobile phone capabilities for NCD risk factor data collection; ethical, legal, and social implications; perceived challenges of using such mobile phone capabilities; and proposed solutions to improve the utility of phone-based capabilities in data collection on NCD risk factors. Participants recognized the potential of mobile technologies to improve data collection efficiency and expand access to hard-to-reach populations. However, concerns emerged regarding inadequate informed consent, risks to privacy and confidentiality, unclear data ownership, and vulnerabilities created by inconsistent enforcement of data protection laws. Social concerns included low digital literacy, unequal access to mobile devices, and fear of stigmatization. Participants emphasized the need for transparent communication, robust data governance, and community engagement. Conclusion: Mobile phone-based systems can strengthen the collection of NCD risk factor data in low-resource settings; however, their benefits depend on addressing key ethical, legal, and social challenges. To ensure responsible deployment, digital health initiatives must prioritize participant autonomy, data protection, equity, and trust building. Integrating contextualized ethical, legal, and social considerations into design and policy frameworks will be essential to leveraging mobile technologies in ways that support inclusive and effective NCD prevention and control.